Towards Sentence-Level Brain Decoding with Distributed Representations
Authors
Abstract
Similar Resources
Decoding Sentiment from Distributed Representations of Sentences
Distributed representations of sentences have been developed recently to represent their meaning as real-valued vectors. However, it is not clear how much information such representations retain about the polarity of sentences. To study this question, we decode sentiment from sentence representations learned with different architectures (sensitive to the order of words, the order of sentences, ...
Learning Deep Temporal Representations for Brain Decoding
Functional magnetic resonance imaging produces high-dimensional data, with a less than ideal number of labelled samples for brain decoding tasks (predicting brain states). In this study, we propose a new deep temporal convolutional neural network architecture with spatial pooling for brain decoding, which aims to reduce the dimensionality of the feature space along with improved classification performan...
Towards a Model Theory for Distributed Representations
Distributed representations (such as those based on embeddings) and discrete representations (such as those based on logic) have complementary strengths. We explore one possible approach to combining these two kinds of representations. We present a model theory/semantics for first order logic based on vectors of reals. We describe the model theory, discuss some interesting properties of such ...
Massed/Distributed Sentence Writing: Post Tasks of Noticing Activity
The purpose of the study was to activate passive lexical knowledge through noticing and to investigate the effect of sentence writing, as the post task of a noticing activity, on strengthening the effect of noticing. Forty-two Iranian female adult upper-intermediate English students of a state university, in two homogeneous groups, participated in noticing the lexical items whose production were not...
Learning Visually Grounded Sentence Representations
We introduce a variety of models, trained on a supervised image-captioning corpus to predict the image features for a given caption, to perform sentence representation grounding. We train a grounded sentence encoder that achieves good performance on COCO caption and image retrieval, and subsequently show that this encoder can successfully be transferred to various NLP tasks, with improved perfor...
Journal
Journal title: Proceedings of the AAAI Conference on Artificial Intelligence
Year: 2019
ISSN: 2374-3468,2159-5399
DOI: 10.1609/aaai.v33i01.33017047